

AI trained on novels tracks how racist and sexist biases have evolved

New Scientist

The tendency of artificial intelligences to pick up sexist and racist biases is a well-known and persistent problem, but researchers are now turning it to their advantage to analyse social attitudes through history. Training AI models on novels from a certain decade can instil them with the prejudices of that era, offering a new way to study how cultural biases have evolved over time. Large language models (LLMs) such as ChatGPT learn by analysing large collections of text.


When people say 'people' online they may mostly be thinking about men

New Scientist

When people use gender-neutral words like "people" and "humanity" they tend to be thinking of men rather than women, reflecting sexism present in many societies, according to an analysis of billions of words published online. The researchers behind the work warn that this sexist bias is being passed on to artificial intelligence models trained on the same text. April Bailey at New York University and colleagues used a statistical algorithm to analyse a collection of 630 billion words contained within 2.96 billion web pages gathered in 2017, including informal text from blogs and discussion forums as well as more formal text written by the media, corporations and governments, mostly in English. They used an approach called word embedding, which derives the intended meaning of a word from how frequently it occurs in context with other words. They found that words like "person", "people" and "humanity" are used in contexts that better match those of words like "men", "he" and "male" than those of words like "women", "she" and "her".
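The comparison described above boils down to measuring which embedding vectors lie closest together, typically with cosine similarity. A minimal sketch of that measurement, using made-up toy vectors rather than the study's actual embeddings (the words and numbers here are purely illustrative):

```python
import math

def cosine(u, v):
    # Cosine similarity: dot(u, v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 3-dimensional embeddings (illustrative values only, not the study's data)
emb = {
    "person": [0.9, 0.4, 0.1],
    "man":    [1.0, 0.3, 0.0],
    "woman":  [0.5, 0.9, 0.1],
}

# The kind of question the researchers asked: does "person" sit closer
# in embedding space to "man" than to "woman"?
bias = cosine(emb["person"], emb["man"]) - cosine(emb["person"], emb["woman"])
print(round(bias, 3))  # positive => "person" is embedded closer to "man"
```

A positive difference with these toy values mirrors the paper's finding, but real studies compute such scores over large word sets and embeddings trained on billions of words.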


Racial, sexist bias may sneak into AI systems: Study

#artificialintelligence

Washington: Artificial intelligence systems can acquire our cultural, racial or gender biases when trained with ordinary human language available online, scientists including one of Indian origin have found. In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. However, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views on race and gender.
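The flowers-versus-insects finding comes from association tests that compare a target word's similarity to one attribute set (e.g. pleasant words) against another (unpleasant words). A minimal sketch of such a score, again over made-up toy vectors rather than real trained embeddings:

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def association(word, A, B, emb):
    # Association score: mean similarity of `word` to attribute set A
    # minus its mean similarity to attribute set B.
    mean_a = sum(cosine(emb[word], emb[a]) for a in A) / len(A)
    mean_b = sum(cosine(emb[word], emb[b]) for b in B) / len(B)
    return mean_a - mean_b

# Toy 2-dimensional embeddings (illustrative values, not the study's data)
emb = {
    "flower": [0.8, 0.2], "insect": [0.2, 0.8],
    "caress": [1.0, 0.0], "freedom": [0.9, 0.1],   # "pleasant" attributes
    "abuse":  [0.0, 1.0], "filth":   [0.1, 0.9],   # "unpleasant" attributes
}
pleasant = ["caress", "freedom"]
unpleasant = ["abuse", "filth"]

print(association("flower", pleasant, unpleasant, emb))  # positive
print(association("insect", pleasant, unpleasant, emb))  # negative
```

With these toy values, "flower" scores positive (more pleasant) and "insect" negative, echoing the morally neutral bias the article mentions; the same machinery, applied to names and occupation words, is what surfaces race and gender biases.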


AI programs exhibit racist and sexist biases, research reveals

The Guardian

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases. The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons. In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained. However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.